
    Characterizing Self-Developing Biological Neural Networks: A First Step Towards their Application To Computing Systems

    Carbon nanotubes are often seen as the only alternative technology to silicon transistors. While they are the most likely short-term alternative, other longer-term alternatives should be studied as well. While contemplating biological neurons as an alternative component may seem preposterous at first sight, significant recent progress in CMOS-neuron interfaces suggests this direction may not be unrealistic; moreover, biological neurons are known to self-assemble into very large networks capable of complex information processing tasks, something that has yet to be achieved with other emerging technologies. The first step to designing computing systems on top of biological neurons is to build an abstract model of self-assembled biological neural networks, much like computer architects manipulate abstract models of transistors and circuits. In this article, we propose a first model of the structure of biological neural networks. We provide empirical evidence that this model matches the biological neural networks found in living organisms, and exhibits the small-world graph structure properties commonly found in many large and self-organized systems, including biological neural networks. More importantly, we extract the simple local rules and characteristics governing the growth of such networks, enabling the development of potentially large but realistic biological neural networks, as would be needed for complex information processing/computing tasks. Based on this model, future work will target understanding the evolution and learning properties of such networks, and how they can be used to build computing systems.
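
    As a concrete, if simplistic, companion to the small-world claim above, the sketch below grows a spatially embedded graph with one local attachment rule and compares its clustering coefficient and average path length to a size-matched random graph. It is not the authors' model; it assumes numpy and networkx, and every parameter value is an illustrative choice.

```python
# Hypothetical growth model (not the paper's): connect each new node to nearby nodes
# with a fixed probability, then test for small-world statistics.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_nodes, radius, p_connect = 300, 0.15, 0.6   # illustrative values

G = nx.Graph()
positions = rng.random((n_nodes, 2))          # neurons scattered in a unit square
for i in range(n_nodes):
    G.add_node(i)
    for j in range(i):                        # local rule: link mostly to close neighbours
        if np.linalg.norm(positions[i] - positions[j]) < radius and rng.random() < p_connect:
            G.add_edge(i, j)

giant = G.subgraph(max(nx.connected_components(G), key=len))
C = nx.average_clustering(giant)
L = nx.average_shortest_path_length(giant)

# Small-world signature: clustering much higher than a random graph with the same
# number of nodes and edges, while path lengths stay comparable.
R = nx.gnm_random_graph(giant.number_of_nodes(), giant.number_of_edges(), seed=0)
Rg = R.subgraph(max(nx.connected_components(R), key=len))
print(f"grown graph:  C = {C:.3f}, L = {L:.2f}")
print(f"random graph: C = {nx.average_clustering(Rg):.3f}, "
      f"L = {nx.average_shortest_path_length(Rg):.2f}")
```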

    Anomalous diffusion due to hindering by mobile obstacles undergoing Brownian motion or Ornstein-Uhlenbeck processes

    In vivo measurements of the passive movements of biomolecules or vesicles in cells consistently report "anomalous diffusion", where mean-squared displacements scale as a power law of time with exponent α < 1 (subdiffusion). While the detailed mechanisms causing such behaviors are not always elucidated, movement hindrance by obstacles is often invoked. However, our understanding of how hindered diffusion leads to subdiffusion is based on diffusion amidst randomly located immobile obstacles. Here, we have used Monte-Carlo simulations to investigate transient subdiffusion due to mobile obstacles with various modes of mobility. Our simulations confirm that the anomalous regimes rapidly disappear when the obstacles move by Brownian motion. By contrast, mobile obstacles with more confined displacements, e.g. Ornstein-Uhlenbeck (OU) motion, are shown to preserve subdiffusive regimes. The mean-squared displacement of the tracked protein displays convincing power laws with anomalous exponent α that varies with the density of OU obstacles or the relaxation timescale of the OU process. In particular, some of the values we observed are significantly below the universal value predicted for immobile obstacles in 2D. Therefore, our results show that subdiffusion due to mobile obstacles with OU-type motion may account for the large variation range exhibited by experimental measurements in living cells and may explain why some experimental estimates are below the universal value predicted for immobile obstacles. (Physical Review E, 2014)
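
    The following is a minimal Monte-Carlo sketch of the setting described above: a Brownian tracer whose moves are rejected when they overlap mobile obstacles driven by an Ornstein-Uhlenbeck process, with the anomalous exponent read off a power-law fit of the mean-squared displacement. It is not the authors' code; densities, step sizes and the OU parameters are illustrative assumptions.

```python
# Hypothetical 2D Monte-Carlo: Brownian tracer rejected when it overlaps mobile
# obstacles whose positions follow an Ornstein-Uhlenbeck (OU) process around fixed anchors.
import numpy as np

rng = np.random.default_rng(1)
box, sigma = 20.0, 0.35                      # box size, tracer-obstacle exclusion radius
dt, k_ou, noise_ou = 0.01, 1.0, 0.3          # OU: dX = -k*(X - anchor)*dt + noise*sqrt(dt)*xi
n_steps, step_tracer = 5000, 0.05

anchors = rng.random((400, 2)) * box
anchors = anchors[np.linalg.norm(anchors - box / 2, axis=1) > 2 * sigma]  # keep the start clear
obstacles = anchors.copy()
tracer = np.array([box / 2, box / 2])
traj = np.empty((n_steps, 2))

for t in range(n_steps):
    # OU update keeps each obstacle fluctuating around its anchor (confined mobility)
    obstacles += -k_ou * (obstacles - anchors) * dt \
                 + noise_ou * np.sqrt(dt) * rng.standard_normal(obstacles.shape)
    # tracer attempts a Brownian step; the move is rejected if it overlaps an obstacle
    trial = tracer + step_tracer * rng.standard_normal(2)
    if np.min(np.linalg.norm(obstacles - trial, axis=1)) > sigma:
        tracer = trial
    traj[t] = tracer

# MSD(t) ~ t**alpha; alpha < 1 signals (transient) subdiffusion
lags = np.unique(np.logspace(0, np.log10(n_steps // 4), 20).astype(int))
msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1)) for l in lags])
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"fitted anomalous exponent alpha ~ {alpha:.2f}")
```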

    Optimising the topology of complex neural networks

    In this paper, we study instances of complex neural networks, i.e. neural networks with complex topologies. We use Self-Organizing Map neural networks whose neighbourhood relationships are defined by a complex network, to classify handwritten digits. We show that topology has a small impact on performance and robustness to neuron failures, at least at long learning times. Performance may however be increased (by almost 10%) by artificial evolution of the network topology. In our experimental conditions, the evolved networks are more random than their parents, but display a more heterogeneous degree distribution.
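
    A minimal sketch of the idea of a Self-Organizing Map with a complex-network neighbourhood is given below: the neighbourhood kernel uses shortest-path distance on a Watts-Strogatz graph instead of a regular grid. It is an illustrative re-implementation, not the authors' code, and it is trained on toy 2D data rather than handwritten digits.

```python
# Hypothetical SOM whose neighbourhood is the shortest-path distance on a complex network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=2)  # neuron topology
dist = dict(nx.all_pairs_shortest_path_length(G))                  # graph distances
n_neurons, dim = G.number_of_nodes(), 2
weights = rng.random((n_neurons, dim))
data = rng.random((2000, dim))                                     # toy input distribution

n_epochs = 2000
for t in range(n_epochs):
    x = data[rng.integers(len(data))]
    bmu = int(np.argmin(np.sum((weights - x) ** 2, axis=1)))       # best-matching unit
    lr = 0.5 * (1 - t / n_epochs)                                  # decaying learning rate
    radius = max(1.0, 4.0 * (1 - t / n_epochs))                    # decaying neighbourhood
    for j in range(n_neurons):
        d = dist[bmu].get(j, np.inf)                               # topological distance
        h = np.exp(-(d ** 2) / (2 * radius ** 2))                  # Gaussian kernel on the graph
        weights[j] += lr * h * (x - weights[j])

qe = np.mean([np.min(np.sum((weights - x) ** 2, axis=1)) ** 0.5 for x in data])
print(f"mean quantization error: {qe:.3f}")
```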

    A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different time scales for neuronal activity and learning dynamics. Previous numerical works have reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on the neural network evolution. Furthermore, we show that the sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
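
    The sketch below illustrates the kind of system being analysed: a discrete-time random recurrent network with a Hebbian rule that includes passive forgetting, together with a largest-Lyapunov-exponent estimate obtained from the Jacobian of the map along the trajectory. Parameters are illustrative and the model is a generic stand-in, not the paper's exact equations.

```python
# Hypothetical stand-in: x(t+1) = tanh(W x(t)), W updated by Hebbian learning with
# passive forgetting; the largest Lyapunov exponent is tracked through the Jacobian.
import numpy as np

rng = np.random.default_rng(3)
N, g = 100, 3.0                                     # size, coupling gain (chaotic regime)
eps, lam = 1e-3, 1e-3                               # Hebbian rate, passive forgetting rate
W = g * rng.standard_normal((N, N)) / np.sqrt(N)
x = 0.1 * rng.standard_normal(N)
v = rng.standard_normal(N)
v /= np.linalg.norm(v)                              # tangent vector for the Lyapunov estimate

lyap_sum, T = 0.0, 3000
for t in range(T):
    x_new = np.tanh(W @ x)
    J = (1.0 - x_new ** 2)[:, None] * W             # Jacobian: diag(1 - tanh(Wx)^2) @ W
    v = J @ v
    lyap_sum += np.log(np.linalg.norm(v) + 1e-300)
    v /= np.linalg.norm(v) + 1e-300
    W += eps * np.outer(x_new, x) - lam * W         # Hebbian term + passive forgetting
    x = x_new

print(f"largest Lyapunov exponent estimate: {lyap_sum / T:.3f}")
```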

    Localization of protein aggregation in Escherichia coli is governed by diffusion and nucleoid macromolecular crowding effect

    Aggregates of misfolded proteins are a hallmark of many age-related diseases. Recently, they have been linked to aging of Escherichia coli (E. coli), where protein aggregates accumulate at the old pole region of the aging bacterium. Because of the potential of E. coli as a model organism, elucidating aging and protein aggregation in this bacterium may pave the way to significant advances in our global understanding of aging. A first obstacle along this path is to decipher the mechanisms by which protein aggregates are targeted to specific intracellular locations. Here, using an integrated approach based on individual-based modeling, time-lapse fluorescence microscopy and automated image analysis, we show that the movement of aging-related protein aggregates in E. coli is purely diffusive (Brownian). Using single-particle tracking of protein aggregates in live E. coli cells, we estimated the average size and diffusion constant of the aggregates. Our results show that the aggregates passively diffuse within the cell, with diffusion constants that depend on their size in agreement with the Stokes-Einstein law. However, the aggregate displacements along the cell long axis are confined to a region that roughly corresponds to the nucleoid-free space in the cell pole, thus confirming the importance of increased macromolecular crowding in the nucleoids. We thus used 3D individual-based modeling to show that these three ingredients (diffusion, aggregation and diffusion hindrance in the nucleoids) are sufficient and necessary to reproduce the available experimental data on aggregate localization in the cells. Taken together, our results strongly support the hypothesis that the localization of aging-related protein aggregates in the poles of E. coli results from the coupling of passive diffusion-aggregation with spatially non-homogeneous macromolecular crowding. They further support the importance of "soft" intracellular structuring (based on macromolecular crowding) in diffusion-based protein localization in E. coli. (PLoS Computational Biology, 2013)
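
    As an illustration of the single-particle-tracking analysis mentioned above, the sketch below simulates a free 2D Brownian track for an aggregate of a given radius, fits the short-time mean-squared displacement to recover a diffusion constant, and compares it with the Stokes-Einstein prediction. Viscosity, aggregate radius and frame interval are assumed values, not the measured ones.

```python
# Hypothetical single-particle-tracking analysis: simulated free 2D Brownian track,
# diffusion constant recovered from the short-time MSD, compared with Stokes-Einstein.
import numpy as np

rng = np.random.default_rng(4)
kB, T, eta = 1.380649e-23, 310.0, 0.05            # J/K, K, Pa*s (assumed effective viscosity)
radius = 100e-9                                    # assumed aggregate radius (m)
D_true = kB * T / (6 * np.pi * eta * radius)       # Stokes-Einstein prediction, m^2/s

dt, n_steps = 0.1, 5000                            # frame interval (s), number of frames
steps = np.sqrt(2 * D_true * dt) * rng.standard_normal((n_steps, 2))
track = np.cumsum(steps, axis=0)                   # simulated trajectory (m)

lags = np.arange(1, 11)                            # short lags only, where MSD = 4*D*t in 2D
msd = np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1)) for l in lags])
D_fit = np.polyfit(lags * dt, msd, 1)[0] / 4.0
print(f"Stokes-Einstein D = {D_true:.3e} m^2/s, fitted D = {D_fit:.3e} m^2/s")
```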

    Chaos in computer performance

    Modern computer microprocessors are composed of hundreds of millions of transistors that interact through intricate protocols. Their performance during program execution may be highly variable and present aperiodic oscillations. In this paper, we apply current nonlinear time series analysis techniques to the performance of modern microprocessors during the execution of prototypical programs. Our results provide strong evidence that the high variability of the performance dynamics during the execution of several programs displays low-dimensional deterministic chaos, with sensitivity to initial conditions comparable to textbook models. Taken together, these results show that the instantaneous performance of modern microprocessors constitutes a complex (or at least complicated) system and would benefit from analysis with modern tools of nonlinear and complexity science.
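
    The sketch below shows the basic delay-embedding step behind such nonlinear time-series analyses, with a Rosenstein-style estimate of how quickly nearby embedded states diverge. A logistic-map series stands in for the measured performance traces, and the embedding dimension, delay and fitting horizon are illustrative choices, not those used in the paper.

```python
# Hypothetical analysis pipeline fragment: delay embedding plus a Rosenstein-style
# divergence estimate; a logistic map replaces the measured performance trace.
import numpy as np

x = np.empty(1200)
x[0] = 0.4
for t in range(1, len(x)):                            # chaotic stand-in series
    x[t] = 4.0 * x[t - 1] * (1 - x[t - 1])

m, tau = 3, 1                                         # embedding dimension and delay
emb = np.stack([x[i:len(x) - (m - 1) * tau + i] for i in range(0, m * tau, tau)], axis=1)

horizon, theiler = 6, 10
n = len(emb) - horizon
d = np.linalg.norm(emb[:n, None, :] - emb[None, :n, :], axis=2)
for i in range(n):                                    # exclude temporally close neighbours
    d[i, max(0, i - theiler):i + theiler + 1] = np.inf
nn = np.argmin(d, axis=1)                             # nearest neighbour of each state

# average log-distance between initially close pairs; its initial slope approximates
# the largest Lyapunov exponent (ln 2 ~ 0.69 for this logistic map)
div = np.array([np.mean(np.log(np.linalg.norm(emb[np.arange(n) + k] - emb[nn + k], axis=1)
                               + 1e-12)) for k in range(horizon)])
slope = np.polyfit(np.arange(horizon), div, 1)[0]
print(f"rough largest-Lyapunov estimate: {slope:.2f}")
```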

    A New Principle for Information Storage in an Enzymatic Pathway Model

    Strong experimental evidence indicates that protein kinase and phosphatase (KP) cycles are critical to both the induction and maintenance of activity-dependent modifications in neurons. However, their contribution to information storage remains controversial, despite impressive modeling efforts. For instance, plasticity models based on KP cycles do not account for the maintenance of plastic modifications. Moreover, bistable KP cycle models that display memory fail to capture essential features of information storage: rapid onset, bidirectional control, graded amplitude, and finite lifetimes. Here, we show in a biophysical model that upstream activation of KP cycles, a ubiquitous mechanism, is sufficient to provide information storage with realistic induction and maintenance properties: plastic modifications are rapid, bidirectional, and graded, with finite lifetimes that are compatible with animal and human memory. The maintenance of plastic modifications relies on negligible reaction rates in basal conditions and thus depends on enzyme nonlinearity and activation properties of the activity-dependent KP cycle. Moreover, we show that information coding and memory maintenance are robust to stochastic fluctuations inherent to the molecular nature of activity-dependent KP cycle operation. This model provides a new principle for information storage where plasticity and memory emerge from a single dynamic process whose rate is controlled by neuronal activity. This principle strongly departs from the long-standing view that memory reflects stable steady states in biological systems, and offers a new perspective on memory in animals and humans.
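
    To make the storage principle concrete, the sketch below integrates a generic kinase-phosphatase cycle with Michaelis-Menten kinetics in which a brief upstream stimulus boosts kinase activity; with near-zero basal rates, the phosphorylated fraction set during the stimulus then decays only slowly. This is a caricature of the general idea, not the article's biophysical model, and all rate constants are assumed.

```python
# Hypothetical kinase-phosphatase cycle with Michaelis-Menten kinetics; a brief stimulus
# boosts kinase activity, basal rates are nearly zero, so the change persists for a long
# but finite time. All rate constants are assumed.
import numpy as np

S_tot, Km = 1.0, 0.1                    # total substrate, Michaelis constant
k_kin_basal, k_phos_basal = 1e-4, 1e-4  # negligible basal activities (1/s)
k_kin_stim = 0.5                        # kinase activity during the stimulus (1/s)
t_on, t_off = 100.0, 110.0              # stimulus window (s)

dt, t_end = 0.05, 2000.0
P = 0.0                                 # phosphorylated fraction
for step in range(int(t_end / dt)):
    t = step * dt
    k_kin = k_kin_stim if t_on <= t < t_off else k_kin_basal
    S = S_tot - P
    dP = k_kin * S / (Km + S) - k_phos_basal * P / (Km + P)
    P += dt * dP                        # forward-Euler step
    if step % 4000 == 0:
        print(f"t = {t:7.1f} s   phosphorylated fraction = {P:.3f}")
```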

    Modelling the modulation of cortical Up-Down state switching by astrocytes

    Up-Down synchronization in neuronal networks refers to spontaneous switches between periods of high collective firing activity (Up state) and periods of silence (Down state). Recent experimental reports have shown that astrocytes can control the emergence of such Up-Down regimes in neural networks, although the molecular or cellular mechanisms involved are still uncertain. Here we propose neural network models made of three populations of cells: excitatory neurons, inhibitory neurons and astrocytes, interconnected by synaptic and gliotransmission events, to explore how astrocytes can control this phenomenon. The presence of astrocytes in the models is indeed observed to promote the emergence of Up-Down regimes with realistic characteristics. Our models show that the difference in signalling timescales between astrocytes and neurons (seconds versus milliseconds) can induce a regime where the frequency of gliotransmission events released by the astrocytes does not synchronize with the Up and Down phases of the neurons, but remains essentially stable. However, these gliotransmission events are found to change the localization of the bifurcations in the parameter space, so that with the addition of astrocytes the network enters a bistability region of the dynamics that corresponds to Up-Down synchronization. Taken together, our work provides a theoretical framework to test scenarios and hypotheses on the modulation of Up-Down dynamics by gliotransmission from astrocytes.
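
    The sketch below illustrates, on a deliberately reduced model, the bifurcation argument summarized above: it scans the tonic input of a single bistable firing-rate population and locates the input range where Up and Down states coexist, then shows how an assumed tonic gliotransmission contribution shifts the operating point into that bistable range. It is not the authors' three-population model; all parameters are illustrative.

```python
# Hypothetical reduced model: one excitatory population r = f(w*r + I); scan the tonic
# input I, count stable fixed points, and see how a tonic gliotransmission input shifts
# the operating point into the bistable (Up/Down) range. All parameters are illustrative.
import numpy as np

BETA = 0.08

def f(x, theta=0.5, beta=BETA):                  # sigmoidal population gain
    return 1.0 / (1.0 + np.exp(-(x - theta) / beta))

def stable_fixed_points(I, w=1.0):
    r = np.linspace(0.0, 1.0, 20001)
    resid = f(w * r + I) - r                      # roots of resid are fixed points
    roots = r[np.where(np.diff(np.sign(resid)) != 0)[0]]
    fp = f(w * roots + I)
    slope = w * fp * (1.0 - fp) / BETA            # d f(w*r + I)/dr at the fixed point
    return roots[slope < 1.0]                     # stable if the slope is below 1

I_scan = np.linspace(-0.3, 0.3, 121)
bistable = [I for I in I_scan if len(stable_fixed_points(I)) == 2]
print(f"bistable (Up/Down) input range: [{min(bistable):.3f}, {max(bistable):.3f}]")

I0, g_glio = -0.26, 0.15                          # baseline drive, assumed tonic gliotransmission
print("without gliotransmission:", len(stable_fixed_points(I0)), "stable state(s)")
print("with gliotransmission:   ", len(stable_fixed_points(I0 + g_glio)), "stable state(s)")
```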

    A Sampling Method Focusing on Practicality

    In the past few years, several research works have demonstrated that sampling can drastically speed up architecture simulation, and several of these sampling techniques are already widely used. However, for a sampling technique to be both easily and properly used, i.e., plugged into many simulators and used reliably with little or no effort or knowledge from the user, it must fulfill a number of conditions: it should require no hardware-dependent modification of the functional or timing simulator, and it should simultaneously consider warm-up and sampling, while still delivering high speed and accuracy. The motivation for this article is that, with the advent of generic and modular simulation frameworks like ASIM, SystemC, LSE, MicroLib or UniSim, there is a need for sampling techniques with the aforementioned properties, i.e., which are almost entirely transparent to the user and simulator agnostic. In this article, we propose a sampling technique focused more on transparency than on speed and accuracy, though the technique delivers almost state-of-the-art performance. Our sampling technique is a hardware-independent and integrated approach to warm-up and sampling; it requires no modification of the functional simulator and relies solely on the performance simulator for warm-up. We make the following contributions: (1) a technique for splitting the execution trace into a potentially very large number of variable-size regions to capture program dynamic control flow, (2) a clustering method capable of efficiently coping with such a large number of regions, (3) a budget-based method for jointly considering warm-up and sampling costs, presenting them as a single parameter to the user, and for distributing the number of simulated instructions between warm-up and sampling based on the region partitioning and clustering information. Overall, the method achieves an accuracy/time tradeoff that is close to the best reported results using clustering-based sampling (though usually with perfect or hardware-dependent warm-up), with an average CPI error of 1.68% and an average of 288 million simulated instructions over the SPEC benchmarks. The technique/tool can be readily applied to a wide range of benchmarks, architectures and simulators, and will be used as a sampling option of the UniSim modular simulation framework.
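
    The sketch below illustrates the general flavour of clustering-based sampling (in the spirit of the contributions listed above, not the authors' tool): a synthetic basic-block trace is cut into variable-size regions, each region is summarized by a basic-block frequency vector, the regions are clustered with a small k-means, and a single instruction budget is split between warm-up and detailed simulation of one representative per cluster. All trace statistics, sizes and ratios are assumptions.

```python
# Hypothetical SimPoint-flavoured pipeline on a synthetic basic-block trace:
# variable-size regions -> frequency vectors -> k-means -> representatives + budget split.
import numpy as np

rng = np.random.default_rng(6)

# synthetic trace of basic-block ids with a phase change every few thousand entries
phases = [rng.integers(0, 50, size=rng.integers(3000, 7000)) % (10 * (p + 1))
          for p in range(6)]
trace = np.concatenate(phases)

# 1) variable-size regions: start a new region when an unseen block shows up late
regions, start, seen = [], 0, set()
for i, bb in enumerate(trace):
    if bb not in seen and len(seen) > 8 and i - start > 500:   # crude phase-change test
        regions.append((start, i))
        start, seen = i, set()
    seen.add(int(bb))
regions.append((start, len(trace)))

# 2) per-region basic-block frequency vectors
n_bb = int(trace.max()) + 1
feats = np.array([np.bincount(trace[a:b], minlength=n_bb) / (b - a) for a, b in regions])

# 3) tiny k-means (numpy only)
k = 3
centroids = feats[rng.choice(len(feats), size=k, replace=False)]
for _ in range(20):
    labels = np.argmin(((feats[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    for c in range(k):
        if np.any(labels == c):
            centroids[c] = feats[labels == c].mean(axis=0)

# 4) one representative region per cluster; a single budget split between warm-up and measurement
budget, warmup_ratio = 2_000_000, 0.5
per_cluster = budget // k
for c in range(k):
    members = np.where(labels == c)[0]
    if len(members) == 0:
        continue
    rep = members[np.argmin(((feats[members] - centroids[c]) ** 2).sum(-1))]
    a, b = regions[rep]
    print(f"cluster {c}: {len(members)} region(s), representative [{a}, {b}), "
          f"warm-up {int(per_cluster * warmup_ratio)} + "
          f"measure {int(per_cluster * (1 - warmup_ratio))} instructions")
```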